    The latency for correcting a movement depends on the visual attribute that defines the target

    Neurons in different cortical visual areas respond to different visual attributes with different latencies. How does this affect the on-line control of our actions? We studied hand movements directed toward targets that could be distinguished from other objects by luminance, size, orientation, color, shape or texture. In some trials, the target changed places with one of the other objects at the onset of the hand’s movement. We determined the latency for correcting the movement of the hand in the direction of the new target location. We show that subjects can correct their movements at short latency for all attributes, but that responses to the attributes color, shape and texture (which are relevant for recognizing the object) are 50 ms slower than responses to the attributes luminance, orientation and size. This dichotomy corresponds both to the distinction between magnocellular and parvocellular pathways and to the dorsal–ventral distinction. The latency also differed systematically between subjects, independently of their reaction times.

    How people achieve their amazing temporal precision in interception

    People can hit rapidly moving balls with amazing precision. To determine how they manage to do so, we explored how various factors that we could manipulate influenced people's precision when intercepting virtual targets. We found that temporal precision was highest for fast targets that subjects were free to intercept wherever they wished. Temporal precision was much poorer when the point of interception was specified in advance. Examining responses to abrupt perturbations of the target's motion revealed that, given the choice, people adjusted where rather than when they would hit the target. A model that combines judging how long it will take to reach the target's path with estimating the target's position at that time from its visually perceived position and velocity could account for the observed precision with reasonable values for all the parameters. The model considers all relevant sources of error, together with the delays with which the various aspects can be adjusted. Our analysis provides a biologically plausible explanation for how light falling on the eye can guide the hand to intercept a moving ball with such high precision.
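
    The model's core computation can be illustrated with a small simulation. This is a minimal sketch assuming illustrative noise levels and timing values, not the authors' fitted parameters: the hand aims at the target's perceived position plus its perceived velocity times the estimated time still needed to reach its path, and the resulting spatial error converts into a temporal error through the target's speed.

```python
import numpy as np

# Minimal sketch of the interception model (all parameter values are
# illustrative assumptions, not the authors' fitted estimates).
rng = np.random.default_rng(0)
n_trials = 10_000

v_target = 0.5      # target speed (m/s), assumed
t_remaining = 0.1   # estimated time left to reach the target's path (s), assumed
sigma_pos = 0.002   # noise in perceived position (m), assumed
sigma_vel = 0.05    # noise in perceived velocity (m/s), assumed
sigma_t = 0.005     # noise in the estimate of the remaining time (s), assumed

# Perceived quantities on each simulated trial.
pos_hat = rng.normal(0.0, sigma_pos, n_trials)
vel_hat = rng.normal(v_target, sigma_vel, n_trials)
t_hat = rng.normal(t_remaining, sigma_t, n_trials)

# The hand aims at perceived position + perceived velocity * remaining time;
# the target actually arrives at v_target * t_remaining.
aim = pos_hat + vel_hat * t_hat
actual = v_target * t_remaining

# A spatial miss becomes a temporal miss through the target's speed.
temporal_error = (aim - actual) / v_target
print(f"simulated temporal SD: {temporal_error.std() * 1000:.1f} ms")
```

    Because the spatial error is divided by the target's speed, the same spatial noise yields a smaller temporal error for faster targets, consistent with the finding that temporal precision was highest for fast targets.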

    Does planning a different trajectory influence the choice of grasping points?

    We examined whether the movement path is considered when selecting the positions at which the digits will contact the object's surface (grasping points). Subjects grasped objects of different heights but with the same radius at various locations on a table. At some locations, one digit crossed to the side of the object opposite to where it started. In doing so, it moved over a short object, whereas it curved around a tall object. This resulted in very different paths for different objects. Importantly, the selection of grasping points was unaffected. That subjects do not appear to consider the path when selecting grasping points suggests that the grasping points are selected before planning the movements towards those points.

    Temporal information can influence spatial localization

    In dynamic environments, it is crucial to accurately consider the timing of information. For instance, during saccades the eyes rotate so fast that even small temporal errors in relating retinal stimulation by flashed stimuli to extra-retinal information about the eyes' orientations will give rise to substantial errors in where the stimuli are judged to be. If spatial localization involves judging the eyes' orientations at the estimated time of the flash, we should be able to manipulate the pattern of mislocalization by altering the estimated time of the flash. We reasoned that if we presented a relevant flash within a short rapid sequence of irrelevant flashes, participants' estimates of when the relevant flash was presented might be shifted towards the centre of the sequence. In a first experiment, we presented five bars at different positions around the time of a saccade. Four of the bars were black; either the second or the fourth bar in the sequence was red. The task was to localize the red bar. We found that when the red bar was presented second in the sequence, it was judged to be further in the direction of the saccade than when it was presented fourth. Could this be because the red bar was processed faster when more black bars preceded it? In a second experiment, a red bar was either presented alone or followed by two black bars. When two black bars followed it, it was judged to be further in the direction of the saccade. We conclude that the spatial localization of flashed stimuli involves judging the eyes' orientation at the estimated time of the flash.
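
    The proposed account lends itself to a few lines of illustration. In the sketch below, the linear saccade profile, its amplitude and duration, and the flash position are all assumptions chosen only to show the mechanism, not values from the experiment: if the judged position is the flash's retinal position plus the eyes' estimated orientation at the estimated time of the flash, then shifting the time estimate later during a saccade shifts the judged position in the saccade's direction.

```python
import numpy as np

# Sketch only: a saccade idealized as a linear rotation with assumed
# amplitude, duration and flash position (none taken from the experiment).
def eye_orientation(t, amplitude=10.0, duration=0.05):
    """Eye orientation (deg) during an idealized linear saccade."""
    return amplitude * float(np.clip(t / duration, 0.0, 1.0))

flash_time = 0.02                                  # true flash time (s)
retinal_pos = 5.0 - eye_orientation(flash_time)    # flash at 5 deg in the world

# Judged position = retinal position + eye orientation at the *estimated* time.
for estimated_time in (0.02, 0.03):                # unbiased vs. shifted estimate
    judged = retinal_pos + eye_orientation(estimated_time)
    print(f"estimated flash time {estimated_time * 1000:.0f} ms -> "
          f"judged position {judged:.1f} deg")
```

    With an unbiased time estimate the flash is judged at its true position; estimating it 10 ms later shifts the judged position 2 deg further in the direction of the saccade.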

    The effect of variability in other objects' sizes on the extent to which people rely on retinal image size as a cue for judging distance

    Retinal image size can be used to judge an object's distance because for any object one can assume that some sizes are more likely than others. It has been shown that an increased variability in the size of otherwise identical target objects over trials reduces the weight given to retinal image size as a distance cue. Here, we examined whether an increased variability in the size of objects of a different color, orientation, or shape also reduces the weight given to retinal image size when judging distance. Subjects had to indicate the 3D position of a simulated target cube. Retinal image size was given significantly less weight as a cue for judging the target cube's distance when differently colored and differently oriented target objects appeared in many simulated sizes, but not when differently shaped objects had many simulated sizes. We also examined whether increasing the variability in the size of cubes in the surroundings reduces the weight given to retinal image size when judging distance. It does not. We conclude that variability in surrounding or dissimilar objects' sizes has a negligible influence on the extent to which people rely on retinal image size as a cue for judging distance.
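
    Results like these are usually interpreted in the standard reliability-weighted cue-combination framework. The sketch below uses assumed variances rather than measured ones: increasing the variability of plausible sizes inflates the variance of the size-based distance estimate, which lowers the optimal weight given to retinal image size.

```python
# Inverse-variance (reliability) weighting: the weight of the size cue falls
# as its variance rises. All variances here are illustrative assumptions.
def size_cue_weight(var_size_cue: float, var_other_cues: float) -> float:
    """Optimal weight for the size-based distance estimate."""
    return (1 / var_size_cue) / (1 / var_size_cue + 1 / var_other_cues)

var_other = 1.0                    # combined variance of the other distance cues
for var_size in (0.5, 1.0, 4.0):   # more size variability -> noisier size cue
    print(f"size-cue variance {var_size}: "
          f"weight {size_cue_weight(var_size, var_other):.2f}")
```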

    Online updating of obstacle positions when intercepting a virtual target

    People rely upon sensory information in the environment to guide their actions. Ongoing goal-directed arm movements are constantly adjusted to the latest estimates of both the target's and the hand's positions. Does the continuous guidance of ongoing arm movements also consider the latest visual information about the positions of obstacles in the surroundings? To find out, we asked participants to slide their finger across a screen to intercept a laterally moving virtual target while passing through a gap between two virtual circular obstacles. At a fixed time during each trial, the target suddenly jumped slightly laterally while continuing to move. In half the trials, the size of the gap changed at the same moment as the target jumped. As expected, participants adjusted their movements in response to the target jump. Importantly, the magnitude of this response depended on the new size of the gap. If participants were told that the circles were irrelevant, changing the gap between them had no effect on the responses. This shows that obstacles' instantaneous positions can be considered when visually guiding goal-directed movements.

    Spatial contextual cues that help predict how a target will accelerate can be used to guide interception

    Objects in one's environment do not always move at a constant velocity but often accelerate or decelerate. People are very poor at visually judging acceleration and normally make systematic errors when trying to intercept accelerating objects. If the acceleration is perpendicular to the direction of motion, it gives rise to a curved path. Can spatial contextual cues help one predict such accelerations and thereby help interception? To answer this question, we asked participants to hit a target that moved as if it were attached to a rolling disk, just as a valve (target) on a bicycle wheel (disk) moves when cycling: constantly accelerating toward the wheel's center. On half the trials, the disk was visible, so participants could use the spatial relations between the target and the rolling disk to guide their interception. On the other half, the disk was not visible, so participants had no help in predicting the target's complicated pattern of accelerations and decelerations. Importantly, the target's path was the same in both cases. Participants hit more targets when the disk was visible than when it was invisible, even when using a strategy that could compensate for neglecting the acceleration. We conclude that spatial contextual cues that help predict a target's accelerations can help intercept it.
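
    The target's motion follows directly from the rolling geometry. The sketch below assumes an arbitrary wheel size and rolling speed (the actual stimulus parameters are not reproduced here): a valve at radius r on a wheel of radius R rolling at constant speed traces a curtate cycloid, and its acceleration always points toward the hub with the constant magnitude r * omega**2. This is the regularity that a visible disk makes predictable.

```python
import numpy as np

# Curtate cycloid traced by a valve on a rolling wheel (R, r and omega are
# arbitrary assumptions for illustration, not the stimulus parameters).
R, r, omega = 0.3, 0.2, 4.0                 # wheel radius (m), valve radius (m), rad/s
t = np.linspace(0, 2 * np.pi / omega, 200)  # one full revolution

x = R * omega * t - r * np.sin(omega * t)   # valve path: curtate cycloid
y = R - r * np.cos(omega * t)

ax = r * omega**2 * np.sin(omega * t)       # second derivative of x
ay = r * omega**2 * np.cos(omega * t)       # second derivative of y

hub = np.stack([R * omega * t, np.full_like(t, R)])  # hub moves at constant velocity
to_hub = hub - np.stack([x, y])             # vector from valve to hub

# At every moment the acceleration equals omega**2 times the valve-to-hub
# vector: constant magnitude r * omega**2, always directed at the hub.
assert np.allclose(np.stack([ax, ay]), omega**2 * to_hub)
print(f"acceleration magnitude: {np.hypot(ax, ay).max():.2f} m/s^2 (constant)")
```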

    Random walk of motor planning in task-irrelevant dimensions

    The movements that we make are variable. It is well established that at least part of this variability is caused by noise in central motor planning. Here, we studied how the random effects of planning noise translate into changes in motor planning. Are the random effects independently added to a constant mean end point, or do they accumulate over movements? To distinguish between these possibilities, we examined repeated, discrete movements in various tasks in which the motor output could be decomposed into a task-relevant and a task-irrelevant component. We found in all tasks that the task-irrelevant component had a positive lag-1 autocorrelation, suggesting that the random effects of planning noise accumulate over movements. In contrast, the task-relevant component always had a lag-1 autocorrelation close to zero, which can be explained by effective trial-by-trial correction of motor planning on the basis of observed motor errors. Accumulation of the effects of planning noise is consistent with current insights into the stochastic nature of synaptic plasticity. It leads to motor exploration, which may subserve motor learning and performance optimization.
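
    This interpretation can be reproduced with a minimal simulation. The noise magnitudes and the correction gain below are assumptions: planning noise accumulates from trial to trial, but observed errors are corrected only along the task-relevant dimension, which yields a positive lag-1 autocorrelation in the task-irrelevant component and one near zero in the task-relevant component.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a series."""
    x = x - x.mean()
    return float((x[:-1] * x[1:]).sum() / (x * x).sum())

rng = np.random.default_rng(1)
n = 5000
gain = 1.0                          # fraction of the observed error corrected (assumed)

relevant = np.zeros(n)
irrelevant = np.zeros(n)
plan_rel = plan_irr = 0.0
for i in range(n):
    plan_rel += rng.normal()        # planning noise accumulates in both dimensions...
    plan_irr += rng.normal()
    relevant[i] = plan_rel
    irrelevant[i] = plan_irr
    plan_rel -= gain * relevant[i]  # ...but only task-relevant errors are corrected

# The task-irrelevant component is a pure random walk here, so its lag-1
# autocorrelation is close to 1; execution noise (omitted for brevity)
# would make it smaller but still positive.
print(f"task-relevant   lag-1: {lag1_autocorr(relevant):+.2f}")    # ~ 0
print(f"task-irrelevant lag-1: {lag1_autocorr(irrelevant):+.2f}")  # > 0
```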

    Moving the Weber fraction: the perceptual precision for moment of inertia increases with exploration force

    How does the magnitude of the exploration force influence the precision of haptic perceptual estimates? To address this question, we examined the perceptual precision for moment of inertia (i.e., an object's "angular mass") under different force conditions, using the Weber fraction to quantify perceptual precision. Participants rotated a rod around a fixed axis and judged its moment of inertia in a two-alternative forced-choice task. We instructed different levels of exploration force, thereby manipulating the magnitude of both the exploration force and the angular acceleration. These are the two signals that the nervous system needs to estimate moment of inertia. Importantly, one can assume that the absolute noise on both signals increases with the signals' magnitudes, while the relative noise (i.e., noise/signal) decreases with increasing signal magnitude. We examined how the perceptual precision for moment of inertia was affected by this neural noise. In a first experiment, we found that a low exploration force yielded a higher Weber fraction (22%) than a high exploration force (13%), suggesting that perceptual precision was constrained by the relative noise. This hypothesis was supported by the result of a second experiment, in which we found that the relationship between exploration force and Weber fraction had a similar shape to the theoretical relationship between signal magnitude and relative noise. The present study thus demonstrates that the amount of force used to explore an object can profoundly influence the precision with which its properties are perceived.
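
    The noise argument can be made concrete with a simple model. The noise law and its constants below are assumptions, not the fitted values: moment of inertia is estimated as torque divided by angular acceleration, and if each signal's absolute noise grows with its magnitude but has a constant floor, sigma(s) = a + b*s, then the relative noise a/s + b falls with signal magnitude, so a larger exploration force predicts a smaller Weber fraction that levels off at high forces.

```python
import numpy as np

# Assumed signal-dependent noise law: absolute noise sigma(s) = a + b*s,
# so relative noise sigma(s)/s = a/s + b decreases with signal magnitude.
a, b = 0.05, 0.08                   # noise floor and proportional noise (assumed)

def relative_noise(signal: float) -> float:
    return (a + b * signal) / signal

# Moment of inertia is estimated as torque / angular acceleration; both
# signals scale with the exploration force, and the relative error of the
# ratio combines their relative noises (in quadrature, if independent).
for force in (0.5, 1.0, 2.0, 4.0):  # arbitrary force units
    weber = np.sqrt(2) * relative_noise(force)
    print(f"force {force:>3}: predicted Weber fraction ~ {weber:.2f}")
```

    With these assumed constants the predicted fraction falls from about 25% at low force toward an asymptote near 11%, qualitatively mirroring the reported drop from 22% to 13%.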

    Eye–hand coupling is not the cause of manual return movements when searching

    When searching for a target with eye movements, saccades are planned and initiated while the visual information is still being processed, so subjects often make saccades away from the target and then have to make an additional return saccade. Presumably, the cost of the additional saccades is outweighed by the advantage of short fixations. We previously showed that when the cost of passing the target was increased, by having subjects manually move a window through which they could see the visual scene, subjects still passed the target and made return movements (with their hand). When moving a window in this manner, the eyes and hand follow the same path. To find out whether the hand still passes the target and then returns when eye and hand movements are uncoupled, we here compared moving a window across a scene with moving a scene behind a stationary window. We ensured that the required movement of the hand was identical in both conditions. Subjects found the target faster when moving the window across the scene than when moving the scene behind the window, but at the expense of making larger return movements. The relationship between return movements and movement speed was the same when comparing the two conditions as when comparing different window sizes. We conclude that the hand passing the target and then returning is not directly related to the eyes doing so, but rather that moving on before the information has been fully processed is a general principle of visuomotor control.